    Martingales and the characteristic functions of absorption time on bipartite graphs

    Evolutionary graph theory investigates how spatial constraints affect processes that model evolutionary selection, e.g. the Moran process. Its principal goals are to find the fixation probability and the conditional distributions of fixation time, and to show how they are affected by different graphs that impose spatial constraints. Fixation probabilities have generated significant attention, but much less is known about the conditional time distributions, even for simple graphs. Those distributions are difficult to calculate, so we consider a close proxy to them: the number of times the mutant population size changes before absorption. We employ martingales to obtain the conditional characteristic functions (CCFs) of that proxy for the Moran process on the complete bipartite graph, treating the process as an absorbing random walk in two dimensions. We then extend Wald's martingale approach to sequential analysis from one dimension to two. Our expressions for the CCFs are novel, compact, and exact, and their parameter dependence is explicit. We show that our CCFs closely approximate those of absorption time. Martingales provide an elegant framework for solving principal problems of evolutionary graph theory, and it should be possible to extend our analysis to more complex graphs than we show here.
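
    As background for the construction this abstract extends, here is a minimal sketch of Wald's one-dimensional martingale from classical sequential analysis (the standard tool, not the paper's two-dimensional result):

        % Wald's exponential martingale for a 1D random walk
        % S_n = X_1 + ... + X_n with i.i.d. steps and moment
        % generating function \phi(\theta) = E[e^{\theta X_1}]:
        \[
          M_n = \frac{e^{\theta S_n}}{\phi(\theta)^n}, \qquad
          \mathbb{E}[M_{n+1} \mid \mathcal{F}_n] = M_n .
        \]
        % Optional stopping at the absorption time \tau gives
        \[
          \mathbb{E}\!\left[ e^{\theta S_\tau}\, \phi(\theta)^{-\tau} \right] = 1 ,
        \]
        % and choosing \theta = \theta(\omega) with \phi(\theta) = e^{-i\omega}
        % turns this into an equation involving E[e^{i\omega\tau}] at each
        % absorbing boundary, from which the conditional characteristic
        % functions of \tau follow.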

    Martingales and the fixation time of evolutionary graphs with arbitrary dimensionality

    Evolutionary graph theory (EGT) investigates the Moran birth–death process constrained by graphs. Its two principal goals are to find the fixation probability and time for some initial population of mutants on the graph. The fixation probability of graphs has received considerable attention; less is known about the distribution of fixation time. We derive clean, exact expressions for the full conditional characteristic functions (CCFs) of a close proxy to fixation and extinction times. That proxy is the number of times that the mutant population size changes before fixation or extinction. We derive these CCFs from a product martingale that we identify for an evolutionary graph with any number of partitions. The existence of that martingale requires only that the connections between those partitions are of a certain type. Our results are the first expressions for the CCFs of any proxy to fixation time on a graph with any number of partitions. The parameter dependence of our CCFs is explicit, so we can explore how they depend on graph structure. Martingales are a powerful approach to studying the principal problems of EGT; their applicability is invariant to the number of partitions in a graph, so we can study entire families of graphs simultaneously.
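
    To make the proxy concrete, here is a Monte Carlo sketch (an illustration of the quantity studied, not the paper's analytical method): simulate the Moran birth-death process on a complete bipartite graph K_{a,b} and estimate the empirical conditional characteristic function of the number of mutant-count changes before fixation.

        import numpy as np

        rng = np.random.default_rng(0)

        def moran_bipartite(a, b, r, start=(1, 0)):
            """One run of the Moran birth-death process on K_{a,b} with
            mutant fitness r. State (i, j) counts mutants in partitions
            A and B. Returns (fixed, changes): whether mutants fixed, and
            how many times the mutant count changed before absorption."""
            i, j = start
            changes = 0
            while (i, j) != (0, 0) and (i, j) != (a, b):
                # Choose a reproducer with probability proportional to fitness.
                w_mut = r * (i + j)
                w_res = (a - i) + (b - j)
                mutant = rng.random() < w_mut / (w_mut + w_res)
                # Locate the reproducer in partition A or B.
                if mutant:
                    in_a = rng.random() < i / (i + j)
                else:
                    in_a = rng.random() < (a - i) / w_res
                # The offspring replaces a uniform neighbour, i.e. a node of
                # the opposite partition; the mutant count changes only when
                # reproducer and victim differ in type.
                if in_a:
                    victim_mutant = rng.random() < j / b
                    if mutant and not victim_mutant:
                        j += 1; changes += 1
                    elif not mutant and victim_mutant:
                        j -= 1; changes += 1
                else:
                    victim_mutant = rng.random() < i / a
                    if mutant and not victim_mutant:
                        i += 1; changes += 1
                    elif not mutant and victim_mutant:
                        i -= 1; changes += 1
            return (i, j) == (a, b), changes

        def ccf(omega, a=3, b=4, r=1.5, runs=20000):
            """Empirical CCF E[exp(i*omega*T) | fixation] of the proxy T."""
            t = np.array([c for fixed, c in
                          (moran_bipartite(a, b, r) for _ in range(runs))
                          if fixed])
            return np.exp(1j * omega * t).mean()

        print(ccf(0.3))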

    Noise-robust text-dependent speaker identification using cochlear models

    One challenging issue in speaker identification (SID) is achieving noise-robust performance. Humans can accurately identify speakers even in noisy environments, so we can leverage our knowledge of the function and anatomy of the human auditory pathway to design SID systems that achieve better noise robustness than conventional approaches. We propose a text-dependent SID system based on a real-time cochlear model, the cascade of asymmetric resonators with fast-acting compression (CARFAC). We investigate the SID performance of CARFAC on signals corrupted by noise of various types and levels, and compare it with conventional auditory feature generators, including mel-frequency cepstral coefficients (MFCCs) and frequency-domain linear prediction, as well as another biologically inspired model, the auditory nerve model. We show that CARFAC outperforms the other approaches when signals are corrupted by noise, and our results are consistent across datasets, types and levels of noise, speaking speeds, and back-end classifiers. We show that the noise-robust SID performance of CARFAC is largely due to its nonlinear processing of auditory input signals. Presumably, the human auditory system achieves noise-robust performance via inherent nonlinearities as well.
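
    For orientation, a minimal sketch of the kind of conventional baseline pipeline the paper compares against (an MFCC front end, not CARFAC itself; the filename is a placeholder): corrupt an utterance with white noise at a target SNR, then extract utterance-level features for a back-end classifier.

        import numpy as np
        import librosa

        def add_noise(y, snr_db, rng=np.random.default_rng(0)):
            """Add white Gaussian noise at the requested SNR (dB)."""
            p_signal = np.mean(y ** 2)
            p_noise = p_signal / 10.0 ** (snr_db / 10.0)
            return y + rng.normal(scale=np.sqrt(p_noise), size=y.shape)

        y, sr = librosa.load("utterance.wav", sr=16000)   # placeholder file
        noisy = add_noise(y, snr_db=5)                    # corrupt at 5 dB SNR
        mfcc = librosa.feature.mfcc(y=noisy, sr=sr, n_mfcc=13)  # (13, n_frames)
        embedding = mfcc.mean(axis=1)  # crude utterance-level feature vector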

    Martingales and the fixation probability of high-dimensional evolutionary graphs

    A principal problem of evolutionary graph theory is to find the probability that an initial mutant population will fix on a graph, i.e. that the mutants will eventually replace the indigenous population. This problem is particularly difficult when the dimensionality of a graph is high. Martingales can yield compact and exact expressions for the fixation probability of an evolutionary graph, and, crucially, their tractability does not necessarily depend on the dimensionality of the graph. We use martingales to obtain the exact fixation probability of graphs with high dimensionality, specifically k-partite graphs (or ‘circular flows’) and megastars (or ‘superstars’). To do so, we require that the edges of the graph permit mutants to reproduce in one direction and the indigenous population in the other. The resultant expressions for fixation probabilities explicitly show their dependence on the parameters that describe the graph structure and on the starting position(s) of the initial mutant population. In particular, we investigate the effect of funneling on the fixation probability of k-partite graphs, as well as the effect of placing an initial mutant in different partitions. These are the first exact and explicit results reported for the fixation probability of evolutionary graphs with dimensionality greater than 2 that are valid over the entire parameter space. It might be possible to extend these results to obtain fixation probabilities of high-dimensional evolutionary graphs with undirected or directed connections. Martingales are a formidable theoretical tool that can solve fundamental problems in evolutionary graph theory, often within a few lines of straightforward mathematics.
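
    For context, the classical benchmarks from the literature against which such results are judged (well-known published formulas, not the paper's new k-partite or megastar expressions): the fixation probability on the complete graph, and the widely cited large-population star-graph limit in which selection is amplified from r to r^2.

        % Fixation probability of i initial mutants of fitness r on the
        % complete graph of N individuals (the Moran baseline):
        \[
          \phi_i = \frac{1 - r^{-i}}{1 - r^{-N}} .
        \]
        % In the large-N limit, the star graph with one initial mutant
        % behaves approximately like a Moran process with fitness r^2:
        \[
          \phi^{\mathrm{star}} \approx \frac{1 - r^{-2}}{1 - r^{-2N}} .
        \]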

    Using optimality to predict photoreceptor distribution in the retina

    The concept of evolution implies that fitness traits of an organism tend toward some constrained optimality. Here, the fitness trait we consider is the distribution of photoreceptors on an organism's retina. We postulate that an organism's photoreceptor distribution optimizes a balance between two quantities: a benefit and a cost. The benefit is defined as the area of the field of vision. The cost is defined as the amount of time spent saccading to a target in the visual field; during this time, we assume nothing is seen. We identify three constraints. First, we assume proportional noise exists in the motor command. Second, we assume saccades are a noisy process. Third, we constrain the total number of photoreceptors. This simplified model fails to predict the human retinal photoreceptor distribution in full detail, but, encouragingly, the distribution it predicts brings us closer to that goal. We discuss possible reasons for the model's current shortcomings and suggest future research directions.
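
    As a toy illustration of the benefit-cost framing only (the functional forms below are hypothetical stand-ins, not the paper's model): allocate a fixed photoreceptor budget across eccentricity bins to trade saturating field-of-view coverage against a saccade-time penalty that grows where density is low.

        import numpy as np
        from scipy.optimize import minimize

        ecc = np.linspace(0.0, 90.0, 19)     # eccentricity bins (degrees)

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()               # densities sum to a unit budget

        def neg_fitness(z, alpha=0.5):
            d = softmax(z)
            benefit = np.sum(1.0 - np.exp(-d / 0.02))  # saturating coverage
            cost = np.mean(1.0 / np.sqrt(d + 1e-9))    # hypothetical penalty:
                                                       # sparse bins localize
                                                       # targets poorly
            return -(alpha * benefit - (1.0 - alpha) * 1e-3 * cost)

        res = minimize(neg_fitness, np.zeros_like(ecc), method="L-BFGS-B")
        density = softmax(res.x)             # predicted distribution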

    Predation and the origin of neurones

    The core design of spiking neurones is remarkably similar throughout the animal kingdom, so their basic function as fast-signalling thresholding cells might have been established very early in their evolutionary history. Identifying the selection pressures that drove animals to evolve spiking neurones could help us interpret their design and function today. We review fossil, ecological and molecular evidence to investigate when and why animals evolved spiking neurones. Fossils suggest that animals evolved nervous systems soon after the advent of animal-on-animal predation, 550 million years ago (MYa). Between 550 and 525 MYa, we see the first fossil appearances of many animal innovations, including eyes. Animal behavioural complexity increased during this period as well, as evidenced by their trace fossils, suggesting that nervous systems were an innovation of that time. Fossils further suggest that, before 550 MYa, animals were either filter feeders or microbial mat grazers; extant sponges and Trichoplax perform these tasks using energetically cheaper alternatives to spiking neurones. Genetic evidence testifies that nervous systems evolved before the protostome-deuterostome split. It is less clear whether nervous systems evolved before the cnidarian-bilaterian split, so cnidarians and bilaterians might have evolved their nervous systems independently. The fossil record indicates that the advent of predation could fit into the window of time between those two splits, though molecular clock studies dispute this claim. Collectively, these lines of evidence indicate that animals evolved spiking neurones soon after they started eating each other. The first sensory neurones could have been threshold detectors that spiked in response to other animals in their proximity, alerting them to perform precisely timed actions, such as striking or fleeing.

    Bayesian inference from single spikes

    Spiking neurons appear to have evolved concurrently with the advent of animal-on-animal predation, near the onset of the Cambrian explosion, 543 million years ago. We hypothesize that the strong selection pressures of predator-prey interactions can explain the evolution of spiking neurons. The fossil record and molecular phylogeny indicate that animals existed without neurons for at least 100 million years prior to the Cambrian explosion. The first animals with nervous systems may have been derived sponge larvae that started feeding in the water column.

    Optimal neural inference of stimulus intensities

    In natural data, the class and intensity of stimuli are correlated. Current machine learning algorithms ignore this ubiquitous statistical property of stimuli, usually by requiring normalized inputs. From a biological perspective, it remains unclear how neural circuits may account for these dependencies in inference and learning. Here, we use a probabilistic framework to model class-specific intensity variations, and we derive approximate inference and online learning rules that reflect common hallmarks of neural computation. Concretely, we show that a neural circuit equipped with specific forms of synaptic and intrinsic plasticity (IP) can simultaneously learn the class-specific features and intensities of stimuli. Our model provides a normative interpretation of IP as a critical part of sensory learning and predicts that neurons can represent nontrivial input statistics in their excitabilities. Computationally, our approach yields improved statistical representations for realistic datasets in the visual and auditory domains. In particular, we demonstrate the utility of the model in estimating the contrastive stress of speech.
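
    A minimal sketch of the idea in circuit terms (a construction under stated assumptions, not the paper's derived rules): a winner-take-all layer learns normalized class features with a Hebbian-like update, while an IP-like update lets each unit's excitability track the typical log-intensity of its class.

        import numpy as np

        rng = np.random.default_rng(0)
        D, K = 16, 3                         # input dimension, number of units
        W = rng.random((K, D))
        W /= W.sum(1, keepdims=True)         # feature weights (normalized)
        b = np.zeros(K)                      # excitabilities (log-intensity)
        eta_w, eta_b = 0.05, 0.05

        def step(x):
            """One inference/learning step on a non-negative pattern x."""
            s = x.sum() + 1e-9
            g = np.log(s)                    # crude stimulus log-intensity
            u = W @ (x / s)                  # drive from normalized pattern
            score = u - 0.5 * (b - g) ** 2   # assumed form: favour units whose
                                             # excitability matches the intensity
            k = int(np.argmax(score))        # winner-take-all inference
            W[k] += eta_w * (x / s - W[k])   # Hebbian-like feature update
            b[k] += eta_b * (g - b[k])       # IP: excitability tracks intensity
            return k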

    Martingales and fixation probabilities of evolutionary graphs

    Evolutionary graph theory is the study of birth–death processes that are constrained by population structure. A principal problem in evolutionary graph theory is to obtain the probability that some initial population of mutants will fix on a graph, and to determine how that fixation probability depends on the structure of the graph. A fluctuating mutant population on a graph can be considered a random walk. Martingales exploit symmetry in the steps of a random walk to yield exact analytical expressions for fixation probabilities, and they do not require simplifying assumptions such as large population sizes or weak selection. In this paper, we show how martingales can be used to obtain fixation probabilities for symmetric evolutionary graphs. We obtain simpler expressions for the fixation probabilities of star graphs and complete bipartite graphs than have previously been reported, and we show that these graphs do not amplify selection for advantageous mutations under all conditions.
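
    The general mechanism can be stated in one line (the graph-specific martingale constructions are the paper's contribution; this is only the standard optional-stopping step): if M_t is a bounded martingale of the mutant configuration, starting at M_0 and taking the value m_fix at fixation and m_ext at extinction, then stopping at the absorption time tau yields the fixation probability phi directly.

        \[
          M_0 = \mathbb{E}[M_\tau]
              = \phi \, m_{\mathrm{fix}} + (1 - \phi)\, m_{\mathrm{ext}}
          \qquad\Longrightarrow\qquad
          \phi = \frac{m_{\mathrm{ext}} - M_0}{m_{\mathrm{ext}} - m_{\mathrm{fix}}} .
        \]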

    Neurons equipped with intrinsic plasticity learn stimulus intensity statistics

    Experience constantly shapes neural circuits through a variety of plasticity mechanisms. While the functional roles of some plasticity mechanisms are well understood, it remains unclear how changes in neural excitability contribute to learning. Here, we develop a normative interpretation of intrinsic plasticity (IP) as a key component of unsupervised learning. We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities. We show analytically that inference and learning for our generative model can be achieved by a neural circuit of intensity-sensitive neurons equipped with a specific form of IP. Numerical experiments verify our analytical derivations and show robust behaviour for artificial and natural stimuli. Our results link IP to nontrivial input statistics, in particular the statistics of stimulus intensities for the classes to which a neuron is sensitive. More generally, our work paves the way toward new classification algorithms that are robust to intensity variations.
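
    Schematically, an intensity-augmented mixture model of the kind described might take the following form (the particular distributions here are assumptions for illustration, not necessarily the paper's):

        % Class c, class-dependent gain (intensity) g, observed stimulus x:
        \[
          c \sim \mathrm{Cat}(\pi), \qquad
          g \sim p(g \mid c), \qquad
          x \mid c, g \sim \mathrm{Poisson}(g \, W_c),
        \]
        % so a unit coding for class c must represent both the feature
        % vector W_c (synaptic weights) and the statistics of g
        % (excitability, via IP).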